A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted by decision-making systems. Explainable Artificial Intelligence (XAI) attempts to address this problem. However, XAI approaches are often tested only on generic classifiers and do not reflect realistic problems such as medical diagnosis. In this paper, we analyze a case study on skin lesion images, in which we customize an existing XAI approach to explain a deep learning model able to recognize different types of skin lesions. The explanation is formed by synthetic exemplar and counter-exemplar images of skin lesions, and offers practitioners a way to highlight the crucial traits responsible for the classification decision. A survey conducted with domain experts, beginners, and unskilled people demonstrates that the use of explanations increases trust and confidence in the automated decision system. Moreover, an analysis of the latent space adopted by the explainer reveals that some of the most frequent skin lesion classes are distinctly separated. This phenomenon may derive from the intrinsic characteristics of each class and can hopefully provide support in resolving the most frequent misclassifications by human experts.
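To make the exemplar/counter-exemplar idea concrete, the following is a minimal Python sketch of one common way such explanations can be generated: perturb the latent code of the query image and keep decoded samples that the black box labels the same (exemplars) or differently (counter-exemplars). The `encoder`, `decoder`, and `classifier` callables and all parameter values are hypothetical placeholders, not the paper's actual implementation.

```python
import numpy as np

def generate_examples(image, encoder, decoder, classifier,
                      n_samples=100, noise_scale=0.5, rng=None):
    """Sketch of latent-space exemplar / counter-exemplar generation."""
    rng = rng or np.random.default_rng(0)
    z = encoder(image)              # latent code of the query image
    label = classifier(decoder(z))  # black-box decision to be explained

    exemplars, counter_exemplars = [], []
    for _ in range(n_samples):
        # Sample a nearby point in latent space and decode it into a
        # synthetic skin-lesion-like image.
        z_new = z + rng.normal(scale=noise_scale, size=z.shape)
        synth = decoder(z_new)
        if classifier(synth) == label:
            exemplars.append(synth)          # same decision: supports the label
        else:
            counter_exemplars.append(synth)  # different decision: shows the boundary
    return exemplars, counter_exemplars
```

Contrasting the two sets lets a practitioner see which visual traits must change before the classifier alters its decision.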
In recent years, many accurate decision support systems have been constructed as black boxes, that is, as systems that hide their internal logic from the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are varied, and each approach is typically developed to provide a solution for a specific problem, thereby delineating, explicitly or implicitly, its own definition of interpretability and explanation. The aim of this paper is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help researchers find the proposals most useful for their own work. The proposed classification of approaches to open black box models should also be useful for putting the many open research questions in perspective.